

Universal Semi-Supervised Learning

Neural Information Processing Systems

Universal Semi-Supervised Learning (UniSSL) aims to solve the open-set problem where both the class distribution (i.e., class set) and the feature distribution (i.e., feature domain) differ between the labeled dataset and the unlabeled dataset. This problem seriously hinders the practical deployment of classical SSL. Unlike existing SSL methods for the open-set problem, which study only one particular scenario of class distribution mismatch and ignore the feature distribution mismatch, we consider a more general case where a mismatch exists in both the class and the feature distribution. For this case, we propose a "Class-shAring data detection and Feature Adaptation" (CAFA) framework that requires no prior knowledge of the class relationship between the labeled and unlabeled datasets. In particular, CAFA uses a novel scoring strategy to detect the data belonging to the shared class set. It then conducts domain adaptation to fully exploit the value of the detected class-sharing data for better semi-supervised consistency training. Extensive experiments on several benchmark datasets show the effectiveness of our method in tackling open-set problems.
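The abstract's two-stage idea can be illustrated with a minimal numeric sketch. The scoring rule below (classifier confidence multiplied by a domain-similarity signal) and the threshold-based weighting are illustrative assumptions, not the paper's exact formulation; `sharing_score`, `consistency_weights`, and the threshold value are hypothetical names and choices.

```python
import numpy as np

def sharing_score(class_probs, domain_sim):
    """Hypothetical class-sharing score: high when the classifier is
    confident that an unlabeled sample belongs to a labeled class AND
    its features resemble the labeled domain. Both signals are
    illustrative stand-ins for the paper's scoring strategy."""
    confidence = class_probs.max(axis=1)   # peak softmax probability per sample
    return confidence * domain_sim         # both factors lie in [0, 1]

def consistency_weights(scores, threshold=0.5):
    """Down-weight likely out-of-class-set samples: only samples scored
    above the (assumed) threshold contribute to the consistency loss."""
    return (scores >= threshold).astype(float) * scores

# Two unlabeled samples: the first is confident and in-domain,
# the second is ambiguous and far from the labeled domain.
probs = np.array([[0.9, 0.1],
                  [0.5, 0.5]])
sim = np.array([1.0, 0.2])
weights = consistency_weights(sharing_score(probs, sim))
```

In this toy example only the first sample receives a nonzero weight, mimicking how detected class-sharing data would drive consistency training while suspected open-set data is suppressed.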


Supplementary Material for Paper "Universal Semi-Supervised Learning"

Neural Information Processing Systems

Moreover, we conduct additional experiments to further evaluate our method in Section C, provide the standard deviation results corresponding to the main paper in Section D, and finally discuss the limitations and societal impact of our method in Section E. For the VisDA2017 dataset, we set the batch size to 64; other implementation details are presented below. The Office-31 dataset contains 3 domains: "Amazon" (A), "DSLR" (D), and "Webcam" (W), each composed of 31 classes.

Hyperparameters:
- Shared: learning rate decay factor 0.2; learning rate decay starts at training iteration 400,000; consistency coefficient ramp-up starts at training iteration 200,000
- Supervised: initial learning rate 0.003
- Π-Model [6, 10]: initial learning rate 3 10

These settings apply to the CAFA framework, which includes class-sharing data detection and feature adaptation. Here we use the Π-Model as the backbone method.
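The shared schedule above (decay factor 0.2 from iteration 400,000; consistency ramp-up from iteration 200,000) can be sketched as follows. The step-decay shape, the sigmoid ramp-up commonly used with the Π-Model, and the `rampup_length` and `max_coeff` values are assumptions; the supplementary material only lists the start iterations and the decay factor.

```python
import math

def learning_rate(step, base_lr=0.003, decay_start=400_000, decay_factor=0.2):
    """Apply the shared decay factor once decay begins.
    (A single step decay is an assumption; the paper lists only the values.)"""
    return base_lr * decay_factor if step >= decay_start else base_lr

def consistency_coefficient(step, max_coeff=1.0, rampup_start=200_000,
                            rampup_length=100_000):
    """Sigmoid-shaped ramp-up, as is standard in Pi-Model training.
    rampup_length and max_coeff are illustrative assumptions."""
    if step < rampup_start:
        return 0.0
    t = min(1.0, (step - rampup_start) / rampup_length)
    return max_coeff * math.exp(-5.0 * (1.0 - t) ** 2)
```

For example, the consistency coefficient stays at zero until iteration 200,000 and then rises smoothly, while the learning rate drops from 0.003 once iteration 400,000 is reached.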


